The stakes are high because Kochava’s secretive data acquisition and AI-aided analytics practices are commonplace in the global location data market. (AFP)

FTC case against Kochava reveals shocking extent of data brokers’ knowledge about individuals

Kochava, a company that bills itself as an industry leader in mobile app data analytics, is locked in a legal battle with the Federal Trade Commission. The case could bring significant changes to the worldwide data marketplace and shape Congress’ approach to artificial intelligence and data privacy. The outcome matters because Kochava’s covert methods of obtaining data and analyzing it with AI are widespread in the global location data industry. Alongside lesser-known data brokers, the mobile data market includes major players like Foursquare and data market exchanges such as Amazon’s AWS Data Exchange.

The FTC’s recently unsealed amended complaint against Kochava makes clear that there is some truth to Kochava’s advertising: it can provide data on “any channel, any device, any audience,” and buyers can “measure everything with Kochava.”

Separately, the FTC is touting a settlement it just reached with data broker Outlogic, which it calls “the first ban on the use and sale of sensitive location data.” Outlogic must destroy its location data and may not collect or use such data to determine who enters and leaves sensitive locations such as health centers, homeless and domestic violence shelters, and places of worship.

According to the FTC and proposed class-action lawsuits filed against Kochava on behalf of adults and children, the company secretly collects and otherwise obtains, without notice or consent, vast amounts of consumers’ location and personal data. It then analyzes that data with artificial intelligence, which lets it predict and influence consumer behavior in strikingly diverse and alarmingly invasive ways, and offers the results for sale.

Kochava has denied the FTC’s allegations.

The FTC says Kochava sells a “360-degree view” of individuals and advertises that it can “associate precise geographic information with email, demographics, devices, households and channels.” In other words, Kochava takes location data, combines it with other data, and links it to consumer identities.
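To make that linkage concrete, here is a minimal, hypothetical sketch in Python of how a broker could join anonymous-looking location pings to an identity graph through a shared mobile advertising ID. The column names, IDs and records are invented for illustration; this is not Kochava’s actual schema or pipeline.

```python
# Hypothetical illustration of linking location pings to identities via a
# shared mobile advertising ID (MAID). All fields and values are invented.
import pandas as pd

# Raw location pings, keyed only by a device's advertising ID
pings = pd.DataFrame({
    "maid": ["abc-123", "abc-123", "def-456"],
    "lat": [43.4917, 43.4921, 40.7128],
    "lon": [-112.0339, -112.0341, -74.0060],
    "timestamp": ["2023-05-01T08:02", "2023-05-01T08:17", "2023-05-01T09:30"],
})

# A separate identity graph mapping the same IDs to real-world attributes
identities = pd.DataFrame({
    "maid": ["abc-123", "def-456"],
    "name": ["Jane Doe", "John Roe"],
    "email": ["jane@example.com", "john@example.com"],
})

# A single join turns anonymous-looking pings into a named movement history
linked = pings.merge(identities, on="maid")
print(linked[["name", "lat", "lon", "timestamp"]])
```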

The data it sells reveals specific information about a person, such as visits to hospitals, “reproductive health clinics, places of worship, homeless and domestic violence shelters, and addiction recovery facilities.” Additionally, the FTC says that by selling such detailed information about people, “Kochava enables others to identify individuals and expose them to threats of stigmatization, stalking, discrimination, job loss, and even physical violence.”

I am a lawyer and law professor who practices, teaches and researches artificial intelligence, data protection and evidence. These complaints, in my view, highlight that US law has not kept pace with the management of commercially available data or artificial intelligence.

Most privacy regulations in the United States were written in the pre-AI era, and there is no comprehensive federal law governing AI-based data processing. Congress is working on legislation to regulate the use of artificial intelligence in decision-making, such as hiring and sentencing, and to provide public transparency about how AI is used. But Congress has yet to pass that legislation.

What court documents reveal

According to the FTC, Kochava collects and then sells its “Kochava Collective” data, which includes precise geographic location data, comprehensive profiles of individual consumers, consumer mobile app usage data, and Kochava “audience segments.”

The FTC says Kochava’s audience segments can be based on “behavioral” and sensitive information such as gender identity, political and religious affiliation, race, visits to hospitals and abortion clinics, and people’s medical information, such as menstruation, ovulation and even cancer treatments. By selecting specific audience segments, Kochava’s customers can identify and target extremely precise groups of people.

This could include, for example, people who identify as “other” or all pregnant women who are African-American and Muslim. The FTC says that selected audience segments can be narrowed down to a specific geographic area or possibly even a specific building.
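As a rough illustration — a hypothetical sketch with invented fields and values, not Kochava’s product — selecting such a segment amounts to filtering a profile table on sensitive attributes and a geographic bounding box:

```python
# Hypothetical sketch of "audience segment" selection. Column names and
# records are invented for illustration only.
import pandas as pd

profiles = pd.DataFrame({
    "maid": ["abc-123", "def-456", "ghi-789"],
    "gender_identity": ["other", "female", "female"],
    "religion": ["muslim", "muslim", "none"],
    "pregnant": [False, True, True],
    "lat": [43.49, 40.71, 40.71],
    "lon": [-112.03, -74.01, -74.00],
})

# Narrow to a precise segment: pregnant Muslim women inside a small
# bounding box, potentially as small as a single building
segment = profiles[
    (profiles["gender_identity"] == "female")
    & (profiles["religion"] == "muslim")
    & profiles["pregnant"]
    & profiles["lat"].between(40.70, 40.72)
    & profiles["lon"].between(-74.02, -73.99)
]
print(segment["maid"].tolist())  # device IDs in the targeted segment
```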

On top of that, the FTC explains, Kochava’s customers can obtain the names, home addresses, email addresses, economic status and stability, and much more information about the people in selected groups. This data is bought by organizations such as advertisers, insurers and political campaigns that seek to categorize and narrowly target people. The FTC also says it can be bought by people who want to harm others.

How does Kochava acquire such sensitive information?

The FTC says Kochava obtains consumer data in two ways: through software development kits that Kochava provides to app developers, and by buying data directly from other data brokers. The FTC says Kochava-supplied software development kits have been installed in more than 10,000 apps worldwide. Embedded with Kochava’s code, these kits collect data in batches and send it back to Kochava without consumers being informed of, or consenting to, the collection.
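The complaint does not reproduce Kochava’s code, but the batching behavior it describes is a common SDK pattern. Here is a hypothetical sketch — the class, endpoint and behavior are invented, not Kochava’s actual SDK — of how an embedded analytics kit could silently buffer location fixes inside a host app and ship them to a broker’s server:

```python
# Hypothetical sketch of an analytics SDK that batches location data and
# sends it to a broker. The class and endpoint are invented for illustration.
import json
import urllib.request

class LocationSDK:
    def __init__(self, endpoint: str, batch_size: int = 50):
        self.endpoint = endpoint
        self.batch_size = batch_size
        self.buffer: list[dict] = []

    def record(self, maid: str, lat: float, lon: float, ts: str) -> None:
        # Called from the host app's location callbacks; the user sees nothing
        self.buffer.append({"maid": maid, "lat": lat, "lon": lon, "ts": ts})
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        # Ship the accumulated batch to the broker in a single request
        body = json.dumps(self.buffer).encode()
        req = urllib.request.Request(
            self.endpoint,
            data=body,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)
        self.buffer.clear()
```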

A separate lawsuit filed against Kochava in California makes similar claims of secret data collection and analysis, and alleges that Kochava sells customized data feeds based on highly sensitive and private information, precisely tailored to its customers’ needs.

AI is piercing your privacy

The FTC’s complaint also shows how evolving artificial intelligence tools are enabling a new phase in data analysis. Generative AI’s ability to process massive amounts of data is shaping what can be done with and learned from mobile data in ways that violate privacy. This includes inferring and disclosing sensitive or otherwise legally protected information, such as patient records and images.
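As a toy demonstration of how such inference works — a hypothetical sketch with synthetic data, unrelated to any real system — a simple classifier trained on visit counts can “predict” a sensitive trait that was never directly disclosed:

```python
# Hypothetical sketch of attribute inference from movement summaries.
# Features, labels and model are synthetic and for illustration only.
from sklearn.linear_model import LogisticRegression

# Features per person: [clinic_visits, pharmacy_visits, gym_visits]
X_train = [[3, 4, 0], [0, 1, 5], [2, 3, 1], [0, 0, 6]]
y_train = [1, 0, 1, 0]  # 1 = has the sensitive condition (synthetic label)

model = LogisticRegression().fit(X_train, y_train)

# Infer the trait for a new person from their visit pattern alone
print(model.predict_proba([[2, 2, 1]])[0][1])  # predicted probability
```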

Artificial intelligence offers the ability to know and predict almost anything about individuals and groups, including highly sensitive behavior. It also makes it possible to manipulate the behavior of individuals and groups, nudging them toward decisions that favor the interests of those deploying the AI tools.

This type of “AI-coordinated manipulation” can override your decision-making abilities without your knowledge.

Privacy in the balance

The FTC enforces laws against unfair and deceptive business practices, and it notified Kochava in 2022 that the company was in violation. Each side has won and lost some rounds in the case so far. Senior U.S. District Judge B. Lynn Winmill, who is overseeing the case, dismissed the FTC’s first complaint and demanded more specific facts from the commission. The FTC responded with an amended complaint containing far more specific allegations.

Winmill has yet to rule on Kochava’s second motion to dismiss the case, but as of a January 3, 2024, court filing, the parties were moving forward with the litigation. A trial is expected in 2025, though no date has been set.

For now, businesses, privacy advocates and policymakers are likely to watch this case closely. Its outcome, combined with proposed legislation and the FTC’s focus on generative AI, data and privacy, could bring major changes to how companies acquire data, the ways AI tools can be used to analyze data, and what data can lawfully be used in machine- and human-based data analytics. (The Conversation)
